Optimizing Fixed-Size Stochastic Controllers for POMDPs
Authors
Abstract
In this paper, we discuss a new approach that represents POMDP policies as finite-state controllers and formulates the optimal policy of a desired size as a nonlinear program (NLP). This new representation allows a wide range of powerful nonlinear programming algorithms to be used to solve POMDPs. Although solving the NLP optimally is often intractable, the results we obtain using an off-the-shelf optimization method are competitive with state-of-the-art POMDP algorithms. Our approach is simple to implement, and it opens up promising research directions for solving POMDPs using nonlinear programming methods.
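Concretely (in generic notation; the symbols below are an illustrative sketch rather than quoted from the paper), a stochastic controller with a fixed node set Q and initial node q_0 is parameterized by action-selection probabilities P(a|q) and node-transition probabilities P(q'|q,a,o). Treating these probabilities together with the node-state values V(q,s) as decision variables gives a nonlinear program of roughly the following form:

\[
\max_{P(a\mid q),\ P(q'\mid q,a,o),\ V}\ \sum_{s} b_0(s)\, V(q_0, s)
\]

subject to, for every node \(q\) and state \(s\),

\[
V(q,s) = \sum_{a} P(a\mid q)\Big[ R(s,a) + \gamma \sum_{s'} T(s'\mid s,a) \sum_{o} O(o\mid s',a) \sum_{q'} P(q'\mid q,a,o)\, V(q',s') \Big],
\]

together with the normalization constraints \(\sum_{a} P(a\mid q)=1\) and \(\sum_{q'} P(q'\mid q,a,o)=1\) and nonnegativity of all probabilities. The products of controller parameters with the value variables make the constraints bilinear, so the program is nonconvex; off-the-shelf NLP solvers therefore return locally optimal controllers of the chosen size rather than globally optimal ones.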
Related Resources
Optimizing Memory-Bounded Controllers for Decentralized POMDPs
We present a memory-bounded optimization approach for solving infinite-horizon decentralized POMDPs. Policies for each agent are represented by stochastic finite state controllers. We formulate the problem of optimizing these policies as a nonlinear program, leveraging powerful existing nonlinear optimization techniques for solving the problem. While existing solvers only guarantee locally opti...
Dual Formulations for Optimizing Dec-POMDP Controllers
Decentralized POMDP is an expressive model for multiagent planning. Finite-state controllers (FSCs)—often used to represent policies for infinite-horizon problems—offer a compact, simple-to-execute policy representation. We exploit novel connections between optimizing decentralized FSCs and the dual linear program for MDPs. Consequently, we describe a dual mixed integer linear program (MIP) for...
Bounded Policy Iteration for Decentralized POMDPs
We present a bounded policy iteration algorithm for infinite-horizon decentralized POMDPs. Policies are represented as joint stochastic finite-state controllers, which consist of a local controller for each agent. We also let a joint controller include a correlation device that allows the agents to correlate their behavior without exchanging information during execution, and show that this lead...
Stochastic Local Search for POMDP Controllers
The search for finite-state controllers for partially observable Markov decision processes (POMDPs) is often based on approaches like gradient ascent, attractive because of their relatively low computational cost. In this paper, we illustrate a basic problem with gradient-based methods applied to POMDPs, where the sequential nature of the decision problem is at issue, and propose a new stochast...
What’s Worth Memorizing: Attribute-based Planning for DEC-POMDPs
Current algorithms for decentralized partially observable Markov decision processes (DEC-POMDPs) require a large amount of memory to produce high quality plans. To combat this, existing methods optimize a set of finite-state controllers with an arbitrary amount of fixed memory. While this works well for some problems, in general, scalability and solution quality remain limited. As an alternativ...
Publication year: 2007